13 research outputs found

    Formative peer assessment in a CSCL environment

    In this case study our aim was to gain more insight into the possibilities of qualitative formative peer assessment in a computer-supported collaborative learning (CSCL) environment. An approach was chosen in which peer assessment was operationalised in assessment assignments and assessment tools embedded in the course material. The course was a higher education, case-based virtual seminar in which students were asked to conduct research and write a report in small multidisciplinary teams. The assessment assignments comprised discussing assessment criteria, assessing the group report of a fellow group, and writing an assessment report. A list of feedback rules was one of the assessment tools. A qualitatively oriented study was conducted, focussing on students’ attitudes towards peer assessment and the practical use of the peer assessment assignments and tools. Results showed that students’ attitude towards peer assessment was positive and that the assessment assignments had added value. However, not all students fulfilled all assessment assignments. Recommendations for the implementation of peer assessment in CSCL environments, as well as suggestions for future research, are discussed.

    Impact of feedback request forms and verbal feedback on higher education students' feedback perception, self-efficacy, and motivation

    In higher education, students often misunderstand teachers’ written feedback. This is worrisome, since written feedback is the main form of feedback in higher education. Organising feedback conversations, in which feedback request forms and verbal feedback are used, is a promising intervention to prevent misunderstanding of written feedback. In this study, a 2 × 2 factorial experiment (N = 128) was conducted to examine the effects of a feedback request form (with vs. without) and feedback mode (written vs. verbal feedback). Results showed that verbal feedback had a significantly higher impact on students’ feedback perception than written feedback; it did not, however, improve students’ self-efficacy or motivation. Feedback request forms did not improve students’ perceptions, self-efficacy, or motivation. Based on these results, we conclude that students have positive feedback perceptions when teachers communicate their feedback verbally, and that more research is needed to investigate the use of feedback request forms.

    Toward a better judgment of item relevance in progress testing

    Background: Items must be relevant to ensure item quality and test validity. Since “item relevance” has not yet been operationalized, we developed a rubric to define it. This study explores the influence of this rubric on the assessment of item relevance and on inter-rater agreement. Methods: Members of the item review committee (RC) and students, teachers, and alumni (STA) reassessed the relevance of 50 previously used progress test (PT) items and decided about their inclusion using a 5-criteria rubric. Data were analyzed at item level using paired-samples t-tests, intraclass correlation coefficients (ICC), and linear regression analysis, and at rater level in a generalizability analysis per group. Results: The proportion of items that the RC judged relevant enough to be included decreased substantially from 1.00 to 0.72 (p < 0.001), and ICCs were > 0.7 across items. The relation between inclusion and relevance was strong (correlation = 0.89, p < 0.001) and did not differ between RC and STA. To achieve an acceptable inter-rater reliability for relevance and inclusion, at least 6 members must serve on the RC. Conclusions: Use of the rubric results in a stricter evaluation of items’ appropriateness for inclusion in the PT and facilitates agreement between the RC and other stakeholders. Hence, it may help increase the acceptability and validity of the PT.
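The rater-count result above (at least six committee members for acceptable inter-rater reliability) follows the usual logic of projecting the reliability of a rater panel from single-rater reliability, for which the Spearman–Brown prophecy formula is the standard tool. A minimal Python sketch, assuming a hypothetical single-rater reliability of 0.40 and a 0.80 target; neither value is reported in the abstract, and the study's own generalizability analysis is not reproduced here:

```python
# Spearman-Brown prophecy formula: reliability of the mean rating of k
# parallel raters, given the reliability r1 of a single rater.

def projected_reliability(r1: float, k: int) -> float:
    """Reliability of the mean of k parallel raters."""
    return k * r1 / (1 + (k - 1) * r1)

def raters_needed(r1: float, target: float = 0.80) -> int:
    """Smallest panel size whose mean rating reaches the target reliability."""
    k = 1
    while projected_reliability(r1, k) < target:
        k += 1
    return k

# With a hypothetical single-rater reliability of 0.40, a panel of six
# raters reaches a projected reliability of 0.80 -- consistent in spirit
# with the abstract's "at least 6 members" finding.
print(raters_needed(0.40, target=0.80))
```

The formula assumes parallel (interchangeable) raters, which is a simplification of a full generalizability analysis that would also separate item and rater variance components.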

    Fostering interactivity through peer assessment in web-based collaborative learning environments

    Extant literature on collaborative learning shows that this instructional approach is widely used. In this chapter, the authors discuss the lack of alignment between collaborative learning and assessment practices. They argue that peer assessment is both a form of collaborative learning and a mode of assessment that perfectly fits the purpose of collaborative learning. As such, the authors purposefully depart from the more traditional application of assessment as a summative tool and advocate formative peer assessment in collaborative learning. This shift towards formative assessment, they believe, has the potential to enhance learning. Their goal in this chapter is to review both the shortcomings of current peer assessment practice and its potential for collaborative learning. Interactivity is central to fostering the alignment between assessment and collaborative learning, and the authors present a set of research-derived guidelines for increasing interactivity through formative peer assessment in collaborative learning contexts.

    The differential effects of task complexity on domain-specific and peer assessment skills

    In this study, the relationship between domain-specific skills and peer assessment skills as a function of task complexity is investigated. We hypothesised that peer assessment skills are superposed on domain-specific skills and would therefore suffer more when higher cognitive load is induced by increased task complexity. In a mixed factorial design with the between-subjects factor task complexity (simple, n = 51; complex, n = 59) and the within-subjects factor task type (domain-specific, peer assessment), secondary school students studied four integrated study tasks, requiring them to learn a domain-specific skill (i.e. identifying the six steps of scientific research) and to learn how to assess a fictitious peer performing the same skill. Additionally, the students performed two domain-specific test tasks and two peer assessment test tasks. The interaction effect found on test performance supports our hypothesis. Implications for the teaching and learning of peer assessment skills are discussed.

    The Balancing Act of Assessment Validity in Interprofessional Healthcare Education: A Qualitative Evaluation Study

    CONSTRUCT &amp; BACKGROUND: In order to determine students' level of interprofessional competencies, there is a need for well-considered and thoroughly designed interprofessional assessments. Current literature about interprofessional assessment focuses largely on the development and validation of instruments such as self-assessments or questionnaires to assess students' knowledge or attitudes. Less is known about the design and validity of integral types of assessment in interprofessional education, such as case-based assessments or performance assessments. The aim of this study is to evaluate the evidence for and threats to the validity of decisions about students' interprofessional performances based on such an integral assessment task. We investigated whether the assessment prototype is a precursor to practice (authenticity) and whether the assessment provides valid information to determine the level of interprofessional competence (scoring). APPROACH: We used a design-based qualitative research design in which we conducted three group interviews with teachers, students, and interprofessional assessment experts. In these semi-structured group interviews, participants evaluated the evidence for and threats to the validity of an interprofessional assessment task; the interviews were analyzed using deductive and inductive content analysis. FINDINGS: Although both evidence for and threats to validity were mentioned, the threats refuting the assessment's validity prevailed. Evidence for the authenticity aspect was that the assessment task, conducting a team meeting, is common in practice. However, its validity was questioned because the assessment task appeared more structured than practice. The most frequently mentioned threat to the scoring aspect was that the process of interprofessional collaboration between the students could not be evaluated sufficiently by means of this assessment task.
CONCLUSIONS: This study showed that establishing interprofessional assessment validity requires three major balancing acts. The first is the balance between authenticity and complexity. As interprofessional practice and competencies are complex, interprofessional assessment tasks require a gradual build-up of, or guidance toward, this complexity and the chaos of practice. The second is the balance between authenticity and scoring, in which optimal authenticity might threaten scoring and vice versa. Simultaneously optimal authenticity and scoring seems impossible, requiring ongoing evaluation and monitoring of interprofessional assessment validity to ensure authentic yet fair assessments for all participating professions. The third balancing act is between team scoring and individual scoring. As interprofessional practice requires collaboration and synthesis of diverse professions, the team process is at the heart of solving interprofessional tasks. However, to stimulate individual accountability, the individual performance should not be neglected.

    Relationships between medical students' co-regulatory network characteristics and self-regulated learning: a social network study

    Introduction: Recent conceptualizations of self-regulated learning acknowledge the importance of co-regulation, i.e., students' interactions with others in their networks to support self-regulation. Using a social network approach, the aim of this study is to explore relationships between characteristics of medical students' co-regulatory networks, perceived learning opportunities, and self-regulated learning. Methods: The authors surveyed 403 undergraduate medical students during their clinical clerkships (response rate 65.5%). Using multiple regression analysis, structural equation modelling techniques, and analysis of variance, the authors explored relationships between co-regulatory network characteristics (network size, network diversity, and interaction frequency), students' perceptions of learning opportunities in the workplace setting, and self-reported self-regulated learning. Results: Across all clerkships, data showed positive relationships between tie strength and self-regulated learning (β = 0.095, p < 0.05) and between network size and tie strength (β = 0.530, p < 0.001), and a negative relationship between network diversity and tie strength (β = -0.474, p < 0.001). Students' perceptions of learning opportunities showed positive relationships with both self-regulated learning (β = 0.295, p < 0.001) and co-regulatory network size (β = 0.134, p < 0.01). Characteristics of clerkship contexts influenced both co-regulatory network characteristics (size and tie strength) and the relationships between network characteristics, self-regulated learning, and students' perceptions of learning opportunities. Discussion: The present study reinforces the importance of co-regulatory networks for medical students' self-regulated learning during clinical clerkships. Findings imply that supporting the development of strong networks aimed at frequent co-regulatory interactions may enhance medical students' self-regulated learning in challenging clinical learning environments. Social network approaches offer promising ways of further understanding and conceptualising self- and co-regulated learning in clinical workplaces.
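The three ego-network characteristics named in the abstract (network size, network diversity, and tie strength via interaction frequency) can be sketched for a single student's network. The names, roles, and frequencies below are invented for illustration, and diversity is computed here as Blau's heterogeneity index, one common operationalisation, not necessarily the one the authors used:

```python
# Toy measures for one student's co-regulatory ego network.
from collections import Counter

def network_size(ties: dict) -> int:
    """Number of distinct people the student co-regulates with."""
    return len(ties)

def tie_strength(ties: dict) -> float:
    """Mean interaction frequency across ties (e.g. contacts per week)."""
    return sum(freq for _, freq in ties.values()) / len(ties)

def diversity(ties: dict) -> float:
    """Blau's heterogeneity index over roles: 1 - sum of squared role
    proportions (0 = everyone has the same role; approaches 1 with many roles)."""
    counts = Counter(role for role, _ in ties.values())
    n = sum(counts.values())
    return 1 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical ego network: name -> (role, weekly interaction frequency).
ego = {
    "resident A":  ("resident",  5),
    "attending B": ("attending", 2),
    "peer C":      ("peer",      4),
    "peer D":      ("peer",      3),
}

print(network_size(ego), tie_strength(ego), diversity(ego))
```

In the study's terms, a larger `ego` dict raises network size, more even spread across roles raises diversity, and higher average frequencies raise tie strength; the abstract's regression coefficients relate these quantities to self-regulated learning scores.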